
    Spectral gaps and error estimates for infinite-dimensional Metropolis-Hastings with non-Gaussian priors

    We study a class of Metropolis-Hastings algorithms for target measures that are absolutely continuous with respect to a large class of non-Gaussian prior measures on Banach spaces. The algorithms are shown to have a spectral gap in a Wasserstein-like semimetric weighted by a Lyapunov function. A number of error bounds are given for computationally tractable approximations of the algorithm, including bounds on the closeness of Cesàro averages and other pathwise quantities obtained via perturbation theory. Several applications illustrate the breadth of problems to which the results apply, such as discretization by Galerkin-type projections and approximate simulation of the proposal.
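
    For orientation, one standard way such a spectral-gap result is phrased (following weak Harris-type arguments; the notation d, V, P, and ρ below is illustrative and not taken from the abstract) is

    \[
      d(u,v) \;=\; \sqrt{\min\!\Bigl(1,\tfrac{\|u-v\|}{\varepsilon}\Bigr)\bigl(1 + V(u) + V(v)\bigr)},
      \qquad
      W_{d}\bigl(\mu P^{n},\,\nu P^{n}\bigr) \;\le\; C\,\rho^{\,n}\,W_{d}(\mu,\nu), \quad \rho < 1,
    \]

    where V is the Lyapunov function, P is the Markov kernel of the algorithm, and W_d is the Wasserstein-like distance induced by the weighted semimetric d.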

    Two Metropolis-Hastings Algorithms for Posterior Measures with Non-Gaussian Priors in Infinite Dimensions

    We introduce two classes of Metropolis-Hastings algorithms for sampling target measures that are absolutely continuous with respect to non-Gaussian prior measures on infinite-dimensional Hilbert spaces. In particular, we focus on certain classes of prior measures for which prior-reversible proposal kernels of the autoregressive type can be designed. We then use these proposal kernels to design algorithms that satisfy detailed balance with respect to the target measures. Afterwards, we introduce a new class of prior measures, called the Bessel-K priors, as a generalization of the gamma distribution to measures in infinite dimensions. The Bessel-K priors interpolate between well-known priors such as the gamma distribution and Besov priors and can model sparse or compressible parameters. We present concrete instances of our algorithms for the Bessel-K priors in the context of numerical examples in density estimation, finite-dimensional denoising, and deconvolution on the circle.
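
    A minimal sketch of the mechanism, not the paper's construction: when the proposal kernel is reversible with respect to the prior, the Metropolis-Hastings acceptance ratio involves only the likelihood potential. The example below uses the classical pCN proposal, which is autoregressive and reversible with respect to a Gaussian prior; the dimension, noise level, and toy denoising likelihood are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    d = 50                                  # discretization dimension (assumed)
    prior_std = 1.0 / np.arange(1, d + 1)   # decaying prior standard deviations
    y = rng.normal(size=d)                  # synthetic data for a toy denoising problem
    noise_std = 0.1

    def phi(u):
        """Likelihood potential (negative log-likelihood of y given u)."""
        return 0.5 * np.sum((y - u) ** 2) / noise_std ** 2

    def ar_proposal(u, beta=0.2):
        """Autoregressive proposal, reversible w.r.t. the N(0, diag(prior_std^2)) prior."""
        xi = prior_std * rng.normal(size=d)
        return np.sqrt(1.0 - beta ** 2) * u + beta * xi

    u = prior_std * rng.normal(size=d)      # start the chain from a prior draw
    samples = []
    for _ in range(5000):
        v = ar_proposal(u)
        # Prior-reversibility of the proposal means detailed balance w.r.t. the
        # target holds with acceptance probability min(1, exp(phi(u) - phi(v))).
        if np.log(rng.uniform()) < phi(u) - phi(v):
            u = v
        samples.append(u.copy())

    posterior_mean = np.mean(samples, axis=0)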

    Model Reduction and Neural Networks for Parametric PDEs

    We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of the finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically, we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
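
    A minimal sketch of the reduce-then-learn idea, assuming PCA for the model-reduction step and a small fully connected network between the reduced coordinates; the toy input fields and the placeholder forward map below stand in for an actual parametric PDE solve.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_train, n_grid = 400, 256
    x = np.linspace(0.0, 1.0, n_grid)

    # Synthetic positive input fields a(x) and a placeholder nonlinear "solution map".
    A = np.array([np.exp(np.sin(2 * np.pi * k1 * x) + 0.5 * np.cos(2 * np.pi * k2 * x))
                  for k1, k2 in rng.uniform(0.5, 3.0, size=(n_train, 2))])
    U = np.cumsum(1.0 / A, axis=1) / n_grid   # stand-in for a PDE solve, not a real one

    # Model reduction: PCA on input and output fields.
    pca_in, pca_out = PCA(n_components=10), PCA(n_components=10)
    A_red = pca_in.fit_transform(A)
    U_red = pca_out.fit_transform(U)

    # Neural network between the reduced (finite-dimensional) coordinates.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(A_red, U_red)

    # Surrogate evaluation: encode -> network -> decode.
    U_pred = pca_out.inverse_transform(net.predict(pca_in.transform(A[:5])))
    rel_err = np.linalg.norm(U_pred - U[:5]) / np.linalg.norm(U[:5])

    Because the network acts only on the reduced coefficients, the surrogate itself does not depend on the grid resolution used to represent the fields, which mirrors the robustness to discretization emphasized in the abstract.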

    Geometric structure of graph Laplacian embeddings

    We analyze the spectral clustering procedure for identifying coarse structure in a data set x_1, …, x_n, and in particular study the geometry of the graph Laplacian embeddings which form the basis for spectral clustering algorithms. More precisely, we assume that the data is sampled from a mixture model supported on a manifold M embedded in R^d, and pick a connectivity length-scale ε>0 to construct a kernelized graph Laplacian. We introduce a notion of a well-separated mixture model which depends only on the model itself, and prove that when the model is well separated, with high probability the embedded data set concentrates on cones that are centered around orthogonal vectors. Our results are meaningful in the regime where ε=ε(n) is allowed to decay to zero at a slow enough rate as the number of data points grows. This rate depends on the intrinsic dimension of the manifold on which the data is supported.
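
    A short sketch of the embedding under study, assuming a Gaussian kernel with length-scale ε and the symmetric normalized graph Laplacian (the paper's precise normalization and well-separatedness condition are not reproduced here); for a well-separated mixture the embedded points concentrate near orthogonal directions and can then be clustered, e.g. by k-means.

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 400

    # Data sampled from a two-component Gaussian mixture in R^2.
    X = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(n // 2, 2)),
                   rng.normal([3.0, 3.0], 0.3, size=(n // 2, 2))])

    eps = 0.5                                                  # connectivity length-scale
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * eps**2))   # kernelized weight matrix
    deg = W.sum(axis=1)
    L_sym = np.eye(n) - (W / np.sqrt(deg)[:, None]) / np.sqrt(deg)[None, :]

    # Graph Laplacian embedding: eigenvectors of the smallest eigenvalues.
    evals, evecs = np.linalg.eigh(L_sym)
    embedding = evecs[:, :2]

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)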